As the number of distributed services (or microservices) in cloud-native applications grows, resource management becomes a challenging task. These applications tend to be user-facing and latency-sensitive, and our goal is to continuously minimize the amount of allocated CPU resources while still satisfying the application latency SLO. Although previous efforts have proposed simple heuristics and sophisticated ML-based techniques, we believe that a practical resource manager should accurately scale CPU resources for diverse applications with minimal human effort and operational overhead. To this end, we ask: can we systematically break resource management down into subproblems solvable by practical policies? Based on the notion of a CPU-throttle-based performance target, we decouple the mechanisms of SLO feedback and resource control, and implement a two-level framework -- Autothrottle. It combines a lightweight learned controller at the global level with agile per-microservice controllers at the local level. We evaluate Autothrottle on three microservice applications, with both short-term and 21-day production workload traces. Empirical results show that Autothrottle saves up to 26.21% of CPU cores over the best-performing baselines across applications, while maintaining the latency SLO.
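To make the decoupling concrete, here is a minimal sketch of what such a throttle-based, two-level control loop might look like. All class names, thresholds, and step sizes are hypothetical illustrations, not Autothrottle's actual implementation.

```python
# Hypothetical sketch of a two-level, throttle-based control loop in the
# spirit of Autothrottle; all names and constants are illustrative.

class LocalController:
    """Per-microservice controller: adjusts the CPU limit to track a
    throttle target set by the global controller."""

    def __init__(self, cpu_limit, step=0.1):
        self.cpu_limit = cpu_limit
        self.step = step

    def adjust(self, observed_throttle, target_throttle):
        # Throttled more than the target: the service needs more CPU.
        if observed_throttle > target_throttle:
            self.cpu_limit += self.step
        # Comfortably under the target: reclaim CPU to save cores.
        elif observed_throttle < 0.5 * target_throttle:
            self.cpu_limit = max(self.step, self.cpu_limit - self.step)
        return self.cpu_limit


def global_controller(latency_p99, slo, target_throttle):
    """Global controller: nudges the shared throttle target based on
    end-to-end SLO feedback (a learned policy in the real system)."""
    if latency_p99 > slo:            # SLO at risk: allow less throttling
        return max(0.0, target_throttle - 0.01)
    return target_throttle + 0.005   # headroom: tolerate more throttling
```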
To replicate the success of text-to-image (T2I) generation, recent works in text-to-video (T2V) generation employ large-scale text-video datasets for fine-tuning. However, such a paradigm is computationally expensive. Humans have the amazing ability to learn new visual concepts from just a single exemplar. We hereby study a new T2V generation problem: One-Shot Video Generation, where only a single text-video pair is presented for training an open-domain T2V generator. Intuitively, we propose to adapt the T2I diffusion model pretrained on massive image data for T2V generation. We make two key observations: 1) T2I models are able to generate images that align well with verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we propose Tune-A-Video with a tailored Sparse-Causal Attention, which generates videos from text prompts via efficient one-shot tuning of pretrained T2I diffusion models. Tune-A-Video is capable of producing temporally coherent videos across various applications, such as change of subject or background, attribute editing, and style transfer, demonstrating the versatility and effectiveness of our method.
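The key mechanism is the sparse-causal attention pattern, in which each frame attends only to the first frame and its immediately preceding frame. Below is a hedged PyTorch sketch of that pattern under simplified shapes; it illustrates the idea and is not the paper's released code.

```python
import torch
import torch.nn.functional as F

def sparse_causal_attention(q, k, v):
    """Illustrative sparse-causal attention: each frame's queries attend
    only to the keys/values of the first frame and the previous frame.
    Shapes: (frames, tokens, dim). A simplified sketch."""
    frames, tokens, dim = q.shape
    out = []
    for i in range(frames):
        prev = max(i - 1, 0)
        # Concatenate K/V of frame 0 and frame i-1
        # (frame 0 simply attends to itself twice).
        k_i = torch.cat([k[0], k[prev]], dim=0)   # (2*tokens, dim)
        v_i = torch.cat([v[0], v[prev]], dim=0)
        attn = F.softmax(q[i] @ k_i.T / dim ** 0.5, dim=-1)
        out.append(attn @ v_i)
    return torch.stack(out)                       # (frames, tokens, dim)
```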
Despite many recent advancements in language modeling, state-of-the-art language models lack grounding in the real world and struggle with tasks involving complex reasoning. Meanwhile, advances in the symbolic reasoning capabilities of AI have led to systems that outperform humans in games like chess and Go (Silver et al., 2018). Chess commentary provides an interesting domain for bridging these two fields of research, as it requires reasoning over a complex board state and providing analyses in natural language. In this work we demonstrate how to combine symbolic reasoning engines with controllable language models to generate chess commentaries. We conduct experiments to demonstrate that our approach generates commentaries that are preferred by human judges over previous baselines.
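As a concrete illustration of the engine-to-LM pipeline, the sketch below uses the python-chess library to extract engine evaluations and assemble them into a commentary prompt. The prompt format, search depth, and engine path are assumptions for illustration; the paper's actual feature set and decoding setup may differ.

```python
import chess
import chess.engine

def commentary_prompt(board, move, engine_path="stockfish"):
    """Build a commentary prompt from symbolic engine analysis.
    Assumes a UCI engine binary (e.g., Stockfish) is installed."""
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    try:
        before = engine.analyse(board, chess.engine.Limit(depth=12))
        san = board.san(move)          # record the move before pushing it
        board.push(move)
        after = engine.analyse(board, chess.engine.Limit(depth=12))
        board.pop()
    finally:
        engine.quit()
    return (
        f"Move played: {san}\n"
        f"Engine eval before: {before['score'].white()}\n"
        f"Engine eval after: {after['score'].white()}\n"
        "Write a commentary on this move for a club-level player:"
    )

# The resulting prompt would then condition a controllable language model,
# grounding the generated commentary in the engine's symbolic analysis.
```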
We present a retrospective on the state of Embodied AI research. Our analysis focuses on 13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are grouped into three themes: (1) visual navigation, (2) rearrangement, and (3) embodied vision-and-language. We discuss the dominant datasets within each theme, evaluation metrics for the challenges, and the performance of state-of-the-art models. We highlight commonalities between top approaches to the challenges and identify potential future directions for Embodied AI research.
VQA is an ambitious task aiming to answer any image-related question. However, in practice, it is hard to build such a system once for all, since user demands are continuously updated and the system has to implement new functions. Thus, continual learning (CL) ability is a must for developing an advanced VQA system. Recently, a pioneering work split a VQA dataset into disjoint answer sets to study this topic. However, CL on VQA involves more than the expansion of label sets (new answer sets). It is crucial to study how to answer questions when a VQA system is deployed in new environments (new visual scenes), and how to answer questions requiring new functions (new question types). Therefore, we propose CLOVE, a benchmark for Continual Learning On Visual quEstion answering, which contains scene- and function-incremental settings covering the two CL scenarios above. In terms of methodology, the main difference between CL on VQA and CL on classification is that the former additionally involves expanding and preventing forgetting of reasoning mechanisms, while the latter focuses on class representations. Thus, we propose a real-data-free replay-based method tailored for CL on VQA, named Scene Graph as Prompt for symbolic replay. It uses a piece of scene graph as a prompt, replaying pseudo scene graphs that represent past images, together with the associated QA pairs. A unified VQA model is also proposed to utilize the current and replayed data to enhance its QA capability. Finally, experimental results reveal the challenges of CLOVE and demonstrate the effectiveness of our method. The dataset and code will be available at https://github.com/showlab/clvqa.
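To give a flavor of symbolic replay, here is a minimal, hypothetical sketch of a buffer that stores compact scene-graph prompts instead of raw images and expands them into pseudo training data. The generator interface, `model.update` method, and sampling details are illustrative assumptions, not the paper's implementation.

```python
import random

# Hypothetical sketch of symbolic replay with scene-graph prompts; the
# `generator` stands in for the paper's learned pseudo-scene-graph model.

class SymbolicReplayBuffer:
    def __init__(self):
        self.prompts = []  # partial scene graphs kept from past tasks

    def store(self, scene_graph_triplets):
        # Keep only a small symbolic prompt, never the raw image.
        k = min(2, len(scene_graph_triplets))
        self.prompts.append(random.sample(scene_graph_triplets, k))

    def replay(self, generator):
        # `generator` expands a prompt into pseudo scene graphs + QA pairs.
        prompt = random.choice(self.prompts)
        return generator(prompt)

def train_step(model, current_batch, buffer, generator):
    # Mix replayed pseudo data with the current task's data.
    replayed = buffer.replay(generator) if buffer.prompts else []
    return model.update(list(current_batch) + list(replayed))
```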
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities are poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Probabilistic programs provide an expressive representation language for generative models. Given a probabilistic program, we are interested in the task of posterior inference: estimating a latent variable given a set of observed variables. Existing techniques for inference in probabilistic programs often require choosing many hyperparameters, are computationally expensive, and/or only work for restricted classes of programs. Here we formulate inference as masked language modeling: given a program, we generate a supervised dataset of variables and assignments, and randomly mask a subset of the assignments. We then train a neural network to unmask the random values, defining an approximate posterior distribution. By optimizing a single neural network across a range of programs we amortize the cost of training, yielding a "foundation" posterior able to do zero-shot inference for new programs. The foundation posterior can also be fine-tuned for a particular program and dataset by optimizing a variational inference objective. We show the efficacy of the approach, zero-shot and fine-tuned, on a benchmark of Stan programs.
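To make the masking idea concrete, the toy sketch below applies it to a single two-variable generative model: the latent z is "masked," and a network is trained to predict it from the observation x, yielding an amortized approximate posterior. The model, architecture, and training setup are simplified assumptions; the paper trains one network across many programs.

```python
import torch
import torch.nn as nn

# Toy sketch of inference as masked modeling for a tiny generative model:
#   z ~ Normal(0, 1);  x ~ Normal(z, 0.5)
# We "mask" z and train a network to recover it from x, i.e., an amortized
# approximate posterior q(z | x). Sizes and hyperparameters are illustrative.

def sample_dataset(n):
    z = torch.randn(n, 1)           # latent assignment
    x = z + 0.5 * torch.randn(n, 1) # observed assignment
    return z, x

net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    z, x = sample_dataset(256)
    mu, log_sigma = net(x).chunk(2, dim=-1)
    # Negative log-likelihood of the masked value under a Gaussian head.
    loss = (log_sigma + 0.5 * ((z - mu) / log_sigma.exp()) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, net(x) parameterizes a zero-shot approximate posterior
# over the masked latent z for new observations x.
```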
A long-standing goal of intelligent assistants such as AR glasses/robots is to assist users in affordance-centric real-world scenarios, such as "how can I run the microwave for 1 minute?". However, there is still no clear task definition or suitable benchmark. In this paper, we define a new task called Affordance-centric Question-driven Task Completion, where the AI assistant should learn from instructional videos and scripts to guide the user step by step. To support the task, we construct AssistQ, a new dataset comprising 531 question-answer samples derived from 100 newly filmed first-person videos. Each question should be completed with multi-step guidance by inferring from visual details (e.g., the location of a button) and textual details (e.g., actions like press/turn). To address this unique task, we develop a Question-to-Actions (Q2A) model that significantly outperforms several baseline methods, while still leaving large room for improvement. We expect our task and dataset to advance the development of egocentric AI assistants. Our project page is available at: https://showlab.github.io/assistq
Semantic representations are of great benefit to the video text tracking (VTT) task, which requires simultaneously classifying, detecting, and tracking texts in videos. Most existing approaches tackle this task by appearance similarity in consecutive frames, while ignoring the rich semantic features. In this paper, we explore robustly tracking video text with contrastive learning of semantic and visual representations. Correspondingly, we present an end-to-end video text tracker with Semantic and Visual Representations (SVRep), which detects and tracks texts by exploiting the visual and semantic relationships between different texts in a video sequence. Besides, with a lightweight architecture, SVRep achieves state-of-the-art performance while maintaining a competitive inference speed. Specifically, with a backbone of ResNet-18, SVRep achieves an IDF1 of 65.9%, running at 16.7 FPS, on the ICDAR2015(video) dataset, an 8.6% improvement over the previous state-of-the-art methods.
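The contrastive objective pairs the visual and semantic embeddings of the same text instance. A generic InfoNCE-style formulation of such a loss is sketched below; it is a standard stand-in for illustration, and SVRep's actual objective may differ in its positive/negative construction.

```python
import torch
import torch.nn.functional as F

def info_nce(visual_emb, semantic_emb, temperature=0.07):
    """Generic InfoNCE-style contrastive loss between visual and semantic
    embeddings; rows of the two inputs are paired positives. An illustrative
    stand-in, not SVRep's exact objective."""
    v = F.normalize(visual_emb, dim=-1)
    s = F.normalize(semantic_emb, dim=-1)
    logits = v @ s.T / temperature        # (N, N) similarity matrix
    targets = torch.arange(v.size(0))     # positives lie on the diagonal
    # Symmetric loss: match visual->semantic and semantic->visual.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```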
Contrastive learning has made considerable progress in computer vision, outperforming supervised pretraining on a range of downstream datasets. However, is contrastive learning the better choice in all situations? We demonstrate two cases where it is not. First, under sufficiently small pretraining budgets, supervised pretraining on ImageNet consistently outperforms a comparable contrastive model on eight diverse image classification datasets. This suggests that the common practice of comparing pretraining approaches at hundreds or thousands of epochs may not produce actionable insights for those with limited compute budgets. Second, even with larger pretraining budgets, we identify tasks where supervised learning prevails, perhaps because the object-centric bias of supervised pretraining makes the model more resilient to common corruptions and spurious foreground-background correlations. These results underscore the need to characterize the tradeoffs of different pretraining objectives across a wider range of contexts and training regimes.